METHOD FOR DETERMINING THE DISPERSION OF THE TOUGHNESS AND THE BRITTLE-DUCTILE TRANSITION TEMPERATURE
Patent abstract:
The invention relates to a computer-implemented method for determining the dispersion of the toughness of a steel product subjected to thermal variations, comprising: a step (300) of acquiring toughness measurements made on a plurality of specimens taken from at least one steel product, of acquiring the size of the specimens on which said measurements were made, and of acquiring the temperature corresponding to each toughness measurement; a step (301) of selecting the functional forms α(T), β(T), γ(T) of the parameters of a Weibull law representative of the dispersion of the toughness of the steel for a given temperature T, a given functional form being defined using at least one coefficient; a step (302) of estimating the coefficients of the functional forms α(T), β(T), γ(T) of the Weibull model so as to maximize the likelihood for the data obtained at the acquisition step, said likelihood being representative of the adequacy between the distribution observed on said data and a theoretical distribution to be adjusted corresponding to the functional forms α(T), β(T), γ(T).
Publication number: FR3020681A1
Application number: FR1453929
Filing date: 2014-04-30
Publication date: 2015-11-06
Inventors: Nadia Perot; Nicolas Bousquet; Michel Marques
Applicants: Electricite de France SA; Commissariat a lEnergie Atomique CEA; Commissariat a lEnergie Atomique et aux Energies Alternatives CEA
Main IPC class:
Patent description:
[0001] The invention relates to a method and a device for determining the dispersion of the toughness and the brittle-ductile transition temperature of a steel component subjected to thermal variations. It applies to the field of characterization of metal alloys. [0002] Many industrial production systems require the use of steel components, such as ferritic steel components, subject to changing thermal environments. These thermal variations can accompany or, on the contrary, slow down the production process for reasons of safety and maintenance of the integrity of the components. [0003] Thus, the walls of the Pressurized Water Reactor (PWR) vessels of the French nuclear fleet, made of several steel grades, like the vast majority of the world's reactor fleet, must, in accidental situations, be cooled by safety injections while still under pressure. The injected cold water causes a pressurized thermal shock (PTS) transient, which is likely to weaken the component. Indeed, ferritic steel is a naturally brittle material, whose resistance to mechanical stress is characterized by two quantities: resilience (impact strength) and toughness (fracture toughness). Resilience defines its ability to deform under impact, while toughness summarizes its ability to withstand shocks without initiating and then propagating a crack until fracture. Any mechanical part indeed contains cracks due to the manufacturing process. While both of these quantities are measured by Charpy pendulum tests or by Pellini tests meeting precise standards, the toughness of steel is of particular importance in industrial risk studies, since the initiation and then the growth of cracks can lead to the fracture of the component.
Thus, the quantification of the toughness of steel appears fundamental in the safety studies of PWR vessels. Many industries are also concerned by this problem, such as the naval industry (hull coatings), the automotive industry (bodywork, engines), aeronautics (engines) and rail (rails). The toughness of ferritic steels follows, as a function of the evolution of the temperature of the medium, an average so-called brittle-ductile evolution: the toughness increases with the temperature, passing from a brittle state to a more compliant state called ductile, allowing plastic deformation. Moreover, around this average curve, the toughness exhibits a natural dispersion. This dispersion results from the definition of toughness itself and from the technical means used to measure it. Indeed, crack initiation is related to the breaking of crystal lattices in the material. According to classical theories of fracture mechanics, such as that of the weakest link, the crack starts at defects of the crystal lattice, which correspond to microstructural elements whose distribution depends randomly on the manufacturing process. More precisely, these elements constitute metallurgical heterogeneities, that is to say, microstructure gradients within the same product, particularly related to the carbon content and the local cooling rate during metallurgical transformations. At constant temperature, the toughness is therefore intrinsically uncertain since it depends on a crack initiation occurring randomly. To this intrinsic uncertainty is added a certain measurement noise, due to the use of the impact pendulum during the Charpy tests (see Figure 1) to produce the direct measurements necessary for the construction of a toughness datum. These are obtained by destructive tests on test pieces taken from the component, or reproducing its chemical composition.
Since the mechanical stress may thus be under- or over-estimated, taking into account the errors made during measurement operations leads to a hierarchization of the toughness data, or even to rejecting some as non-compliant (limit cases of toughness). [0004] These data are then discarded from studies aimed at precisely quantifying the dispersion of the toughness and its average value. The comparison of the resistance of the different grades of steel, for example usable for the same component, requires for each a summary of this brittle-ductile transition. As a reminder, the grade of an alloy is the specification of its chemical composition. The most widely used tool is the notion of transition temperature between the brittle and ductile states. It defines an inflection point of the average toughness curve. The lower this temperature, the better the ductility properties of the steel, and the more plastically deformable it is under stress. Concomitantly, the fracture toughness is defined as the toughness distribution observed at the transition temperature. Historically, the determination of this temperature appeared as a necessity for the designers of the "Liberty Ships", cargo ships built in series in the United States during the Second World War, whose steel hulls threatened to crack in cold waters. Moreover, in the particular case of steels used in the nuclear industry, irradiation causes embrittlement of the component, which results in an increase in the temperature separating the brittle behavior of the material from the ductile behavior. Under the effect of the neutron flux, it increases by a shift depending on the fluence, that is to say the intensity of the stream of radiative particles, and on the concentration of certain chemical components of the vessel steel such as phosphorus, copper and nickel.
Accordingly, in order to predict the evolution of the fragility of an irradiated component, it is essential to measure the transition temperature shift, and thus to characterize the initial transition temperature. Depending on the measurement accuracy of such an offset, operating margins can be taken. By the very definition of the experimental method of measuring toughness, the transition temperature is not directly measurable in these destructive experiments. A temperature range can be roughly determined, but the tests are expensive, especially on irradiated steels. It therefore seems essential to conduct a quantification based on the complete description of the behavior of the toughness, that is to say of its mean curve and of the uncertainty that bears on this curve. These two elements moreover constitute an important input for so-called structural reliability studies designed to reproduce the response of the component in the potentially accidental situation where a sudden thermal variation acts on it. These studies are generally conducted using a code simulating this behavior. The state of the art mainly comprises two methods for estimating the transition temperature between the brittle and ductile states of steel. A first method determines the Reference Temperature for Nil Ductility Transition (RTNDT) and is formulated by the American Society of Mechanical Engineers (ASME) in a set of documents, called the Boiler and Pressure Vessel Code, which establishes safety rules governing the design, manufacture and inspection of boilers and pressure vessels, as well as nuclear power plant components during construction phases (i.e., without irradiation). This standardized method (ASTM Standard E208) consists of empirically approaching a cracking temperature observed on several test specimens under precise cracking conditions.
It should be noted that the RTNDT method characterizes a finished product, defined by carrying out a casting, a shaping and a heat treatment. For example, in the case of the components of a PWR plant, two nozzles from the same ingot constitute two products. Two shells from two different castings also constitute two different products. A second method is a significant refinement of the previous one and leads to the determination of the temperature T0, whose determination is also subject to a standardized procedure, described in ASTM E1921, Standard Test Method for Determination of Reference Temperature, T0, for Ferritic Steels in the Transition Range, ASTM International. This non-empirical approach is based on a statistical modeling of the physics of plastic deformation at the crack tip. This modeling seeks to define the nature of the dispersion of the toughness in order to characterize the median and the dispersion of the toughness of a ferritic steel. More precisely, the toughness is then modeled, at constant temperature, by a Weibull law with three parameters deduced from the weakest link theory, whose distribution function is written in the following manner:

F(k; α, K0, Kmin) = P(KIC ≤ k) = 1 − exp( −((k − Kmin) / (K0 − Kmin))^α )

expression in which:
- α is a shape parameter set to four;
- Kmin is a location parameter set at 20 MPa·m^(1/2); the value of α is a direct consequence of the weakest link theory, whereas that of Kmin is derived from a graphical estimate made from a steel database of the EPRI;
- K0 is a scale parameter. This parameter is a function of the temperature T, and its expression is defined according to that of the so-called standardized toughness for specimens of 25 mm thickness, which is written:

K0(Tr) = 31 + 77 × exp(0.019 × Tr)

where Tr is the temperature, possibly offset such that Tr = T − RTNDT when grouping different products, in order to conduct a homogeneous statistical analysis.
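As an illustration, the standardized model above can be sketched in a few lines of Python. This is a non-authoritative sketch: the function names are mine, and the default values α = 4, Kmin = 20 MPa·m^(1/2) are the ones quoted in the description above.

```python
import math

def k0(t_r):
    """Scale parameter K0(Tr) = 31 + 77*exp(0.019*Tr), in MPa*m^(1/2),
    with t_r the possibly offset temperature (Tr = T - RTNDT) in deg C."""
    return 31.0 + 77.0 * math.exp(0.019 * t_r)

def weibull_cdf(k, k0_val, alpha=4.0, k_min=20.0):
    """Three-parameter Weibull distribution function P(KIC <= k)."""
    if k <= k_min:
        return 0.0
    return 1.0 - math.exp(-(((k - k_min) / (k0_val - k_min)) ** alpha))

def mean_toughness(t_r, alpha=4.0, k_min=20.0):
    """Mean of the Weibull law: Kmin + (K0 - Kmin) * Gamma(1 + 1/alpha)."""
    return k_min + (k0(t_r) - k_min) * math.gamma(1.0 + 1.0 / alpha)

print(round(k0(0.0), 1))              # 108.0
print(round(mean_toughness(0.0), 1))  # 99.8
```

At Tr = 0 (i.e. T = RTNDT) the mean toughness of this sketch is already close to the 100 MPa·m^(1/2) level used to define T0 below, which is the expected behavior of the standardized curve.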
[0005] The temperature T0 is then defined as the temperature corresponding to an average toughness of 100 MPa·m^(1/2). To calculate it at the product level, it is necessary to solve the following equation:

(K0(T0 − RTNDT) − Kmin) × Γ(1 + 1/α) + Kmin = 100 MPa·m^(1/2)

This method of determining the transition temperature has been popularized around the world as the "Master Curve" and has the acronym MC. K. Wallin's article, Irradiation damage effects on the fracture toughness transition curve shape for reactor pressure vessel steels, published in August 1991, describes this method precisely. The MC approach is richer than the first approach, since it first produces an explanatory model for the dispersion of the toughness. However, this method has several limitations. A first limitation concerns the assumption of homogeneity, which requires that the data come from a single material. It is related to the fact that the Master Curve method is based on the weakest link theory, in which the material is supposed to contain randomly distributed defects. The strong hypothesis of macroscopic homogeneity makes it possible to assume a uniform and isotropic toughness. In addition to macroscopic homogeneity, the material is assumed to have a substantially monophasic microstructure. If one moves away from these assumptions, the impact on the validity of the Master Curve model can be significant. A second important limitation concerns the temperature validity window of the toughness model estimated by the Master Curve method. The model is considered valid over a range of ±50 °C; outside this range, validity is no longer assured. A third limitation is that the estimate of T0 is made on a sample of test temperature data corresponding to toughness values close to 100 MPa·m^(1/2).
Therefore, if the variability of the sample of temperature data is large, the uncertainty on T0 is as well:

ΔT0 = β × Zα / √r

with:
- β = 18 °C for a median toughness greater than 83 MPa·m^(1/2),
- r the number of valid data used to estimate T0,
- Zα the confidence level (Z90% = 1.64).

For example, for a confidence level of 90% and about ten data, ΔT0 = 9.33 °C. One of the aims of the invention is to remedy shortcomings/drawbacks of the state of the art and/or to make improvements thereto. To this end, the object of the invention is a computer-implemented method for determining the dispersion of the toughness of a steel product subjected to thermal variations, comprising: a step of acquiring toughness measurements carried out on a plurality of test pieces taken from at least one steel product, of acquiring the size of the specimens on which said measurements have been made, and of acquiring the temperature corresponding to each toughness measurement; a step of selecting the functional forms α(T), β(T), γ(T) of the parameters of a Weibull law representative of the dispersion of the toughness of the steel for a given temperature T, a given functional form being defined using at least one coefficient; a step of estimating the coefficients of the functional forms α(T), β(T), γ(T) of the Weibull model so as to maximize the likelihood for the data obtained at the acquisition step, said likelihood being representative of the adequacy between the distribution observed on said data and a theoretical distribution to be adjusted corresponding to the functional forms α(T), β(T), γ(T). In one embodiment, the functional forms are selected using the results obtained following the application of the acquisition step. In one embodiment, the method comprises a step of sub-sampling the toughness measurements obtained at the measurement acquisition step, so as to estimate the parameters of the Weibull law over predefined temperature intervals.
In one embodiment, the Weibull law parameters are estimated for a temperature interval identified in the sub-sampling step, relative to a reference temperature Tref associated with this interval. In one embodiment, the reference temperature Tref of an interval is the average temperature of said interval. In one embodiment, the reference temperature Tref of an interval is the median temperature of said interval. [0006] In one embodiment, sequential sub-sampling is performed such that each subsample of data corresponds to the data associated with a temperature sub-interval. In one embodiment, sliding sub-sampling is implemented such that each subsample of data corresponds to the data associated with a temperature sub-interval. In one embodiment, the method comprises a step of local estimation of the three parameters α(Tref), β(Tref), γ(Tref) of the Weibull law on each sub-interval with reference temperature Tref obtained by sub-sampling. In one embodiment, the local estimation is implemented using the method of moments. In one embodiment, the local estimation is implemented using the maximum likelihood method. In one embodiment, the local estimation is implemented using a combination of the method of moments and the maximum likelihood method. In one embodiment, the step of selecting the functional forms is carried out on the basis of k estimates of α, β and γ, a function being adjusted on these data as a function of the temperature, said function being chosen from a constant or linear function, a quadratic function, an exponential function with two coefficients or an exponential function with three coefficients. In one embodiment, the method comprises a step of determining the variation intervals of the coefficients of the functional forms identified at the step of selecting the functional forms α(T), β(T), γ(T), these intervals being determined by a method of statistical inference of bootstrap type.
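The bootstrap determination of the variation intervals can be illustrated with a minimal Python sketch: resample the local estimates (Tref, parameter value) with replacement, refit a candidate functional form each time, and take percentile bounds. The function names, the restriction to a linear form and the illustrative data are assumptions of this sketch, not part of the patent.

```python
import random

def fit_linear(pairs):
    """Least-squares fit of y = c0 + c1*t, a candidate functional form."""
    n = len(pairs)
    mt = sum(t for t, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    c1 = (sum((t - mt) * (y - my) for t, y in pairs)
          / sum((t - mt) ** 2 for t, _ in pairs))
    return my - c1 * mt, c1

def bootstrap_slope_interval(pairs, n_boot=2000, level=0.90, seed=0):
    """Percentile bootstrap interval for the slope coefficient c1."""
    rng = random.Random(seed)
    slopes = []
    while len(slopes) < n_boot:
        sample = [rng.choice(pairs) for _ in pairs]
        if len({t for t, _ in sample}) < 2:   # degenerate resample: cannot fit
            continue
        slopes.append(fit_linear(sample)[1])
    slopes.sort()
    lo = slopes[int((1.0 - level) / 2.0 * n_boot)]
    hi = slopes[int((1.0 + level) / 2.0 * n_boot) - 1]
    return lo, hi

# Hypothetical local estimates (Tref in deg C, Weibull parameter value):
local = [(-90, 45.0), (-60, 58.0), (-30, 75.0), (0, 101.0), (30, 140.0)]
print(bootstrap_slope_interval(local))
```

The same resampling loop applies unchanged to the coefficients of the quadratic or exponential forms, with the linear fit replaced by the corresponding fit.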
In one embodiment, the method comprises a step of evaluating the adjusted functional model with respect to the number of parameters and data used, said model being the result of the estimation of the coefficients, the evaluation of this model being carried out by applying to it a conditional χ² test. In one embodiment, the method comprises a step of determining the brittle-ductile transition temperature, said temperature corresponding to the value of the temperature obtained when the mean of the dispersion model of the toughness is equal to a reference value of 100 MPa·m^(1/2). In one embodiment, the coefficients of the functional forms α(T), β(T), γ(T) of the Weibull model are estimated by maximizing a cost function, said function corresponding to the log-likelihood of the theoretical Weibull model applied to the previously acquired toughness measurement data, the Weibull model being characterized by three parameters defined by the functional forms α(T), β(T), γ(T) defined from local estimates made on a sub-sampling with respect to temperature. In one embodiment, the acquisition step also includes an acquisition of the relevance level of the other acquired data, these levels being chosen to indicate that the data are valid, censored on the left or censored on the right. The invention also relates to an electronic system comprising at least one hardware module implementing the method described above. The invention also relates to a computer program comprising instructions for executing the method described above, when the program is executed by a data processing module. Other features and advantages of the invention will become apparent from the following description, given by way of nonlimiting illustration with reference to the appended drawings, in which: FIG. 1 represents an impact pendulum used for Charpy tests in order to produce the direct measurements necessary for the construction of a toughness datum; FIG.
2 gives an example of toughness measurements which testify to the intrinsic variability, for a given temperature value, of this toughness; FIG. 3 schematically illustrates a method for determining the dispersion of the toughness of a steel product subjected to thermal variations; FIG. 4 shows a technique exploiting the scale invariance property of the Weibull law to pass from a toughness measurement carried out with a specimen of size Be to the same measurement carried out with a reference specimen size B0; FIG. 5 is a diagram illustrating a way in which the functional models of the Weibull law parameters can be selected; FIG. 6 gives an example of implementation of the second phase of the method for determining the dispersion of the toughness according to the invention, said phase being applied after selection of the functional forms; FIG. 7 presents a genetic algorithm for solving the optimization problem of searching for the values of the Weibull law parameters α(T), β(T), γ(T); FIG. 8 is a diagram illustrating in a simplified manner a validation technique for the adjusted functional model; FIG. 9 illustrates the division of the space of a variable Y|T; FIG. 10 shows an embodiment of the invention comprising the optional steps of sub-sampling, local estimation and determination of the variation intervals of the coefficients of the functional forms; FIG. 11 gives an example of an electronic system that can implement the method according to the invention. Figure 1 shows an impact pendulum used for Charpy tests to produce the direct measurements needed to construct a toughness datum. The device comprises a hammer 100, a support 101, the test piece 102 on which the test is performed, and a dial 103. Figure 2 gives an example of toughness measurements, the associated brittle-ductile temperature T0 = −30 °C, and the dispersion 200 of these measurements at this temperature. The invention is implemented in two main phases.
The first phase aims to select a parametric structure and the second phase aims to determine the value of the parameters of this structure. The method according to the invention is based on experimental data and advantageously makes it possible to integrate all the available data from destructive experiments. The method according to the invention makes it possible, from a set of physical toughness measurements whose relevance can be graded according to pre-existing standards, to provide an estimator of the brittle-ductile transition temperature of a ferritic steel and to characterize the dispersion of its toughness around its mean curve by a statistical modeling tool. These two quantities respectively make it possible to classify steel grades according to their resistance to sudden stresses such as shocks, and to offer a fundamental input to structural reliability studies of steel components. In a very advantageous manner, data of different types from impact-pendulum experiments can be used as inputs of the method according to the invention. [0007] Figure 3 schematically illustrates a method for determining the dispersion of the toughness of a steel product subjected to thermal variations. This method comprises a step 300 of acquiring toughness measurements made on a plurality of specimens taken from a steel product, of acquiring the size of the specimens on which said measurements were made, and of acquiring the temperature corresponding to each toughness measurement. In this specification, a steel product refers to a finished article obtained by performing casting, shaping and heat treating. The method also comprises a step 301 of selecting the functional forms α(T), β(T), γ(T) of the parameters of a Weibull law representative of the dispersion of the steel toughness for a given temperature T, a given functional form being defined using at least one coefficient. Steps 300 and 301 can be applied in any order. In one embodiment, the functional forms are arbitrarily chosen.
Alternatively, the functional forms are selected using the results 304 obtained following the application of the acquisition step 300. In this case, step 300 must be applied before step 301. A step 302 then determines the coefficients of the functional forms. For this, the coefficients of the functional forms α(T), β(T), γ(T) of the Weibull model are estimated in such a way that the likelihood is maximal for the data, said likelihood measuring the adequacy between the distribution observed on the data and a theoretical distribution to be adjusted. This means that the product of the densities of the theoretical law applied to the data will be maximal in the case where this law fits the data well. [0008] In one embodiment, a cost function can be maximized. This cost function corresponds to the log-likelihood of the theoretical Weibull model, whose parameters have the functional forms defined in step 302 from the local estimates made on the sub-sampling, applied to the toughness measurement data acquired during step 300. The dispersion model of the toughness is then obtained in step 303. The brittle-ductile transition temperature can be determined by identifying the value of the temperature for which the mean of the dispersion model of the toughness takes a reference value of 100 MPa·m^(1/2). The mean of the dispersion model of the toughness is an explicit function of the coefficients of the functional forms when they are known. Figure 4 presents a technique exploiting the scale invariance property of the Weibull law to pass from a toughness measurement carried out with a specimen of size Be to the same measurement carried out with a reference specimen size B0.
Under the assumption that, at constant temperature, the toughness follows a Weibull law, one can pass from a toughness measurement KIC-Be made with a test piece of size Be to the same measurement KIC-B0 carried out with a reference specimen size B0 thanks to the following relation, which is due to the scale invariance property of the Weibull law:

KIC-B0 = Kmin + (KIC-Be − Kmin) × (Be/B0)^(1/α)   (1)

When α and Kmin are fixed at known values (for example, in the Wallin Master Curve model, which requires that α = 4 and Kmin = 20 MPa·m^(1/2)), the adjustment uses the data normalized by relation (1). On the other hand, if this is not the case, this normalization of the data must be introduced into the likelihood term of the model under consideration. The statistical likelihood term (density) of a datum ki,T, considered as valid, is then expressed by the following expression:

fKIC(ki, T) = (α / (K0 − Kmin)(T)) × (Bi/B0) × ((ki − Kmin(T)) / ((K0 − Kmin)(T)))^(α−1) × exp( −(Bi/B0) × ((ki − Kmin(T)) / ((K0 − Kmin)(T)))^α )   (2)

wherein B0 denotes the thickness of the standard specimen and Bi denotes the size of the specimen on which the measurement at temperature T was made. The so-called KIC measurements, the obtaining procedure of which is the subject of the ASTM E399-90 standard, are usually considered to be valid, as are the indirect elasto-plastic KJC measurements, which attempt to overcome the non-compliance with the linear-elasticity constraints necessary for the validity of the mechanical theory underlying the obtaining of KIC. Furthermore, experimental data obtained for different specimen sizes and test temperatures which cannot be considered as valid toughness data correspond to bounds on a missing toughness observation, according to the hierarchization criteria proposed in the article by Houssin, B., Langer, R., Lidbury, D., Planman, T., Wallin, K., entitled Unified reference fracture toughness design curves for reactor pressure vessel steels, EE/S.01.0163 Rev. B Final Report, 2001. They provide statistical information called censoring.
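Relations (1) and (2) can be transcribed into a short Python sketch. This is illustrative only; B0 = 25 mm, α = 4 and Kmin = 20 MPa·m^(1/2) are the Master Curve defaults mentioned above, and the function names are mine.

```python
import math

def size_correct(k_be, b_e, b0=25.0, alpha=4.0, k_min=20.0):
    """Relation (1): bring a toughness measured on a specimen of thickness b_e
    back to the reference thickness b0, using the Weibull scale invariance."""
    return k_min + (k_be - k_min) * (b_e / b0) ** (1.0 / alpha)

def logpdf_valid(k, b_i, k0_t, b0=25.0, alpha=4.0, k_min=20.0):
    """Log of the likelihood term (2) of a valid toughness datum k measured
    on a specimen of thickness b_i, at a temperature where the scale is k0_t."""
    z = (k - k_min) / (k0_t - k_min)
    return (math.log(alpha * (b_i / b0) / (k0_t - k_min))
            + (alpha - 1.0) * math.log(z)
            - (b_i / b0) * z ** alpha)

# A 12.5 mm specimen reading of 90 MPa*m^(1/2) expressed at the 25 mm reference size:
print(round(size_correct(90.0, 12.5), 1))  # 78.9
```

Note that for Bi = B0 the density in `logpdf_valid` reduces to the ordinary three-parameter Weibull density, as expected from relation (2).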
The so-called KCm, KCpm and KMAX data (hereinafter collectively referred to as KCm) come from quasi-valid destructive experiments, and can be considered as lower bounds for the expected toughness value, that is to say with censoring on the right. The statistical likelihood term of a datum kCm,i,T obtained with a test piece of size Bi at the test temperature T is then expressed as follows:

P(KIC > kCm,i,T) = exp( −(Bi/B0) × ((kCm,i,T − Kmin(T)) / ((K0 − Kmin)(T)))^α )   (3)

Conversely, the data called KJc_limit correspond: either to an overshoot of the range where the toughness measurement is relevant, and are upper bounds for the expected toughness value, i.e. with censoring on the left; the statistical likelihood term of such a datum kJc_lim,T obtained with a specimen of size Bi at test temperature T is then:

P(KIC < kJc_lim,T) = 1 − exp( −(Bi/B0) × ((kJc_lim,T − Kmin(T)) / ((K0 − Kmin)(T)))^α )   (4)

or to a value marking an experimental limit due to the impossibility of obtaining a relevant toughness measurement under the considered temperature conditions, in which case this datum constitutes a datum censored on the right, with the same statistical treatment as the KCm data. Next, the specimen size correction is used for the local estimates and the overall fit, while the censored data are introduced only in the overall fit. [0009] Figure 5 is a diagram illustrating a way in which the functional models can be selected. In this example, a first step 500 of sub-sampling the toughness measurements is followed by a local estimation step 501. The selection 502 of the functional models is then performed. This example corresponds to an embodiment in which the measurements acquired in step 300 are processed (500, 501) to be subsequently used in a step 304 for the selection of the functional models (301, 502). It should be noted that the steps 500 and 501 described below are optional.
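The three likelihood terms (2), (3) and (4) combine into one log-likelihood over a mixed data set. The sketch below is an illustrative, non-authoritative transcription; the status labels and the small data set are assumptions of this example.

```python
import math

def log_likelihood(data, k0_t, b0=25.0, alpha=4.0, k_min=20.0):
    """Sum of the likelihood terms (2), (3) and (4) over a data set.
    Each datum is (value, specimen thickness, status) with status one of
    'valid', 'right' (lower bound, e.g. KCm) or 'left' (upper bound, KJc_limit)."""
    total = 0.0
    for k, b_i, status in data:
        r = b_i / b0
        z = (k - k_min) / (k0_t - k_min)
        if status == 'valid':                  # density term (2)
            total += (math.log(alpha * r / (k0_t - k_min))
                      + (alpha - 1.0) * math.log(z) - r * z ** alpha)
        elif status == 'right':                # log P(KIC > k), term (3)
            total += -r * z ** alpha
        else:                                  # log P(KIC < k), term (4)
            total += math.log(1.0 - math.exp(-r * z ** alpha))
    return total

data = [(95.0, 25.0, 'valid'), (120.0, 12.5, 'right'), (70.0, 25.0, 'left')]
print(round(log_likelihood(data, k0_t=110.0), 3))  # -7.303
```

Maximizing this quantity over the coefficients of the functional forms is exactly the role of step 302 in the overall fit.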
[0010] When the dispersion of the toughness data is important relative to the temperature, it is particularly useful to be able to estimate the parameters of the Weibull law over a reduced temperature interval; the estimate is then made relative to the reference temperature of this interval. This reference temperature is either the average temperature or the median temperature of the interval. To process the entire temperature domain of the database, two types of sub-sampling of the toughness data, based on the division of the temperature domain, can be performed (500), so that each subsample of data corresponds to the data associated with a temperature sub-interval: sequential sub-sampling and sliding sub-sampling. An estimate 501 of the parameters of the Weibull law is then carried out on each subsample. The local estimates obtained then make it possible to select (502) a functional model of the temperature for each parameter of the probabilistic model. [0011] In one implementation, the optional sub-sampling step 500 implements sequential sub-sampling. This consists of cutting the temperature domain IT of the database into N consecutive sub-intervals with the same thermal amplitude ΔT. [0012] Alternatively, step 500 implements sliding sub-sampling, consisting of cutting the temperature domain IT of the database into N sub-intervals of amplitude ΔT. A first temperature interval I1 is constructed, which starts at the lowest temperature T1 of the temperature domain and extends to the temperature T1 + ΔT, and the next sub-interval I2 is obtained by sliding the sub-interval I1 by a shift dT. Thus, I2 = [T1 + dT, T1 + dT + ΔT]. This operation is repeated until the sub-interval reaches the maximum temperature of IT. The method can then implement a local estimation step 501. This step aims to perform a local estimation of the three parameters of the Weibull probabilistic model on each sub-interval obtained by sequential or sliding sub-sampling.
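The two sub-sampling schemes above can be sketched as follows (an illustrative sketch; the function names, the half-open interval convention and the example data are assumptions):

```python
def sequential_windows(t_min, t_max, width):
    """Sequential sub-sampling: consecutive intervals of the same thermal amplitude."""
    out, lo = [], t_min
    while lo < t_max:
        out.append((lo, lo + width))
        lo += width
    return out

def sliding_windows(t_min, t_max, width, shift):
    """Sliding sub-sampling: each interval is the previous one shifted by `shift`."""
    out, lo = [], t_min
    while lo < t_max:
        out.append((lo, lo + width))
        lo += shift
    return out

def subsample(data, window):
    """Data whose test temperature falls in the half-open interval [lo, hi)."""
    lo, hi = window
    return [d for d in data if lo <= d[0] < hi]

# Hypothetical (temperature, toughness) pairs:
temps = [(-95, 38.0), (-72, 44.0), (-51, 61.0), (-33, 80.0), (-10, 115.0)]
print(sequential_windows(-100, -10, 30))  # [(-100, -70), (-70, -40), (-40, -10)]
```

With a shift smaller than the width, the sliding scheme produces overlapping windows and therefore smoother sequences of local estimates, at the price of correlated subsamples.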
For this, three alternatives can be implemented, in particular taking into account the specimen size associated with each toughness datum. A first alternative for the local estimation is to use the method of moments. For any sample k, the method of moments is defined by the following algorithm. For each αh (shape parameter) varying over the interval [a; b] with a step h, the estimate of the parameters (K0 − Kmin) and Kmin is calculated by the method of moments defined as follows:

μ = (1/N) × Σi=1..N (KIC-B0,i − Kmin) = (K0 − Kmin) × Γ(1 + 1/α)
σ² = (1/N) × Σi=1..N (KIC-B0,i − Kmin − μ)² = (K0 − Kmin)² × (Γ(1 + 2/α) − Γ²(1 + 1/α))   (5)

Taking into account the correction of the size of the specimens necessary for the homogenization of the data, on each sample k of size Nk, and denoting KmB the minimum toughness of the database, we obtain:

μk = (1/Nk) × Σi=1..Nk ( KmB + (KIC-Be,i − KmB) × (Bi/B0)^(1/αh) )
σk² = (1/Nk) × Σi=1..Nk ( KmB + (KIC-Be,i − KmB) × (Bi/B0)^(1/αh) − μk )²   (6)

The local estimates (K0 − Kmin)k and (Kmin)k are then deduced from the system of equations (6):

(K0 − Kmin)k = σk / √( Γ(1 + 2/αh) − Γ²(1 + 1/αh) )
(Kmin)k = μk − (K0 − Kmin)k × Γ(1 + 1/αh)   (7)

Only the triplets (αh, (K0 − Kmin)k, (Kmin)k) which have a physical meaning are retained. An adjustment test criterion is then determined for each retained triplet, and only the triplet (αh*, (K0 − Kmin)k*, (Kmin)k*) which minimizes the criterion of the fit test is kept.
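The method-of-moments alternative can be sketched in Python as follows. This is a simplified, non-authoritative sketch: the shape-parameter grid, the use of the database minimum as a plug-in for the unknown Kmin inside the size correction, and the example data are assumptions of this illustration.

```python
import math

def cramer_von_mises(sample, a, scale, k_min):
    """Cramer-von Mises distance between the fitted Weibull law and the sample."""
    xs = sorted(sample)
    n = len(xs)
    s = 1.0 / (12.0 * n)
    for i, x in enumerate(xs, start=1):
        f = 1.0 - math.exp(-(((x - k_min) / scale) ** a))
        s += ((2.0 * i - 1.0) / (2.0 * n) - f) ** 2
    return s

def moments_estimate(data, b0=25.0):
    """Local method-of-moments estimate on one sub-sample of
    (toughness, specimen thickness) pairs, gridding the shape parameter."""
    k_mb = min(k for k, _ in data)               # database minimum, plug-in value
    best = None
    for j in range(91):                          # alpha_h in [1, 10], step 0.1
        a = 1.0 + 0.1 * j
        corr = [k_mb + (k - k_mb) * (b / b0) ** (1.0 / a) for k, b in data]
        n = len(corr)
        mu = sum(corr) / n
        var = sum((c - mu) ** 2 for c in corr) / n
        g1 = math.gamma(1.0 + 1.0 / a)
        g2 = math.gamma(1.0 + 2.0 / a)
        scale = math.sqrt(var / (g2 - g1 ** 2))  # (K0 - Kmin)_k, eq. (7)
        k_min = mu - scale * g1                  # (Kmin)_k, eq. (7)
        if not 0.0 < k_min < min(corr):          # keep physical triplets only
            continue
        crit = cramer_von_mises(corr, a, scale, k_min)
        if best is None or crit < best[0]:
            best = (crit, a, scale, k_min)
    return best

# Hypothetical sub-sample: (toughness in MPa*m^(1/2), specimen thickness in mm).
sub = [(41.0, 25.0), (55.0, 12.5), (62.0, 25.0), (48.0, 50.0), (70.0, 25.0), (58.0, 25.0)]
print(moments_estimate(sub))
```

The returned tuple is (criterion value, αh*, (K0 − Kmin)k*, (Kmin)k*), i.e. the retained triplet together with its fit-test score.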
A second alternative allowing the local estimation The likelihood is defined by the product of the densities at the N data points: VIC ,,,, ah, (K0-1 (,,,)) (X) = 1-1 i = 1 Ko - Kmin Kmin K0- Kmin Kmin with X = (xl, x2, ..., xN) The maximum likelihood method is to search for the estimators of (ah, (K0-K ''), K ' ') which make maximum likelihood L That which is equivalent to looking for estimators that maximize the log likelihood function: N ln [L (Khhh, ah, (K0-Knh')) (x)] = Iln [f ( The method then consists in varying a scale parameter tn over the interval [a; b with a step h. Parameter estimates Ko-K ',' and a are then solutions of the following system: a (ln ( L (Km in, a, (K0-Knzin))) (X) = 0 aoe (9) (1n (aL) (Kinz 'a, (K0-Knin)) (X) a (Ko-K,') For any sample k, the maximum likelihood method is adapted to account for the correction by specimen size. The algorithm used can then be as follows: Kin (scaling parameter) varying over the interval [a; b with a step h, Xk the data of the subsample k; - if Kmhin = a + h, we search for solutions (K0 and ak of the system: Vi = 1, ..., N k, = (xk - Kmm) xa (ln (L (K ": '', a, (K0 - Knzin))) (5ik) aoe a (ln (L ((en'zi ,,, a, (K0-Knzin)) (5ik) a (Ko-K, ') - otherwise we search for solutions ( K0 -K ,,,) k and ak of the system (10) with: = 0 r B. vBo / = 0 (10) = 0 .7cik -KL) x B. 
- the goodness-of-fit criterion is computed and the triplet (α_k, (K0 − Kmin)_k, (Kmin)_k) which minimizes the statistical criterion is selected. The fit criterion is described later in the description.

A third alternative allowing the local estimation consists in a combined method which takes place in two stages and combines the method of moments and the method of maximum likelihood. First, the equations of the method of moments are posed, which makes it possible to obtain an expression of Kmin as a function of (K0 − Kmin) and α thanks to the first moment; this expression is then introduced into the equation of the second moment to obtain an expression of α as a function of the empirical means and variances of the toughness and of the ratio B_i/B0. Kmin and (K0 − Kmin) are thus obtained as functions of α. In a second stage, an estimate of α is computed by the maximum likelihood method applied to a Weibull law whose parameters are (α, (K0 − Kmin)(α), Kmin(α)). The estimate of α is then injected into the expressions of (K0 − Kmin) and Kmin from the method of moments. This method is applied to each sub-sample k, which makes it possible to obtain an estimated triplet (α_k, (K0 − Kmin)_k, (Kmin)_k) for each of them.

The method of moments and the maximum likelihood method produce several solution triplets, among which one must be chosen according to a valid statistical criterion. The Cramér-von Mises criterion, the Kolmogorov-Smirnov criterion and the Anderson-Darling criterion can all be used to determine the triplet that minimizes the associated statistic. [0013] For the Cramér-von Mises criterion, the statistic measures the average difference between the estimated distribution function and the empirical distribution function:

W² = 1/(12N) + Σ_{i=1..N} [F_N*(x_(i)) − F(x_(i))]²

with F_N*, the empirical distribution function, and F, the estimated distribution function.
Which amounts to selecting the triplet (α*, (K0 − Kmin)*, Kmin*) which minimizes:

W² = 1/(12N) + Σ_{i=1..N} [F(x_(i); (α, (K0 − Kmin), Kmin)) − (2i − 1)/(2N)]²

where the x_(i) are the strictly ordered toughness data. [0014] For the Kolmogorov-Smirnov criterion, the statistic measures the maximum difference between the estimated distribution function and the empirical distribution function:

D_n = sup_x |F_N*(x) − F(x)|

Only the triplet (α*, (K0 − Kmin)*, Kmin*) which minimizes D_n defined below will be retained:

D_n = sup_i |F_N*(x_(i)) − F(x_(i); (α, (K0 − Kmin), Kmin))|

With respect to the Anderson-Darling criterion, the statistic measures the difference between the estimated distribution function and the empirical distribution function by giving more weight to the distribution tails:

A² = −N − (1/N) · Σ_{i=1..N} (2i − 1) · [ln(F(x_(i))) + ln(1 − F(x_(N+1−i)))]

Only the triplet (α*, (K0 − Kmin)*, Kmin*) which minimizes the A² statistic will be retained. [0015] Step 502 of selection of the functional models for the three parameters of the Weibull law can be implemented as described below.
After the step 501 of computation of the local estimates of the three parameters of the Weibull law for each of the sub-samples, k estimates of α(T), β(T) = (K0 − Kmin)(T) and γ(T) = Kmin(T) are obtained. A function of the temperature is then adjusted on these data. The choice can be made among the following functions:
- linear: Y = a + b·T
- quadratic: Y = a + b·T + c·T²
- exponential with two coefficients: Y = a·exp(b·T)
- exponential with three coefficients: Y = a + b·exp(c·T)
- constant: Y = a

Figure 6 gives an example of implementation of the second phase of the method of determination of the dispersion of the toughness according to the invention, said phase being applied after selection of the functional forms. As a reminder, the first phase aims to select a parametric structure, and the second phase has the objective of determining the values of the parameters of this structure. In this example, two steps are implemented to adjust the functional forms of the three parameters of the Weibull law. A step 600 makes it possible to determine the intervals of variation of the coefficients of the functional forms, and a step 601 implements a global adjustment of these parameters.

Step 600 makes it possible to define intervals of variation of the coefficients of the functional forms representative of the parameters of the Weibull law. To determine the variation range of the coefficients of the functional models of the three parameters of the Weibull law to be adjusted, a bootstrap statistical inference method is used. This method can be implemented as described below. Let N be the size of the toughness database; a draw with replacement of size N is performed in the database. The operation is repeated until N_Boot replicates of the database are obtained. On each replicate, a sub-sampling is performed, followed by a local estimation of the parameters and an adjustment of these estimates as a function of the temperature.
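A sketch of the least-squares adjustment of the linear functional form and of the bootstrap construction of the coefficient variation intervals of step 600 is given below; the quartile-based bounds ĉ ± 3·(q3 − q1) follow the formulas of the text, while `linfit`, the replicate count and the omission of the per-replicate sub-sampling and local-estimation stage are simplifying assumptions:

```python
import random

def linfit(ts, ys):
    """Ordinary least squares for the linear functional form Y = a + b*T."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    b = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
         / sum((t - mt) ** 2 for t in ts))
    return my - b * mt, b                    # (a, b)

def bootstrap_bounds(ts, ys, n_boot=500, rng=random):
    """Bootstrap the coefficients of the linear fit and build the variation
    interval [c - 3*(q3 - q1), c + 3*(q3 - q1)] of each coefficient."""
    n = len(ts)
    reps = []
    for _ in range(n_boot):                  # draws with replacement of size N
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(linfit([ts[i] for i in idx], [ys[i] for i in idx]))
    bounds = []
    for c_hat, sample in zip(linfit(ts, ys), zip(*reps)):
        s = sorted(sample)
        q1, q3 = s[len(s) // 4], s[3 * len(s) // 4]
        bounds.append((c_hat - 3.0 * (q3 - q1), c_hat + 3.0 * (q3 - q1)))
    return bounds                            # [(L_a, U_a), (L_b, U_b)]
```

In a full run, each bootstrap replicate would itself be sub-sampled and locally estimated before the temperature adjustment; here `ys` stands directly for the local estimates of one Weibull parameter.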
N_Boot values are then obtained for each coefficient of the models of the three parameters. To obtain the bounds [L_p,c ; U_p,c] of the variation interval of each coefficient c of the model of the parameter p, the following formulas can be used:

L_p,c = ĉ − 3 · (q3 − q1)
U_p,c = ĉ + 3 · (q3 − q1)

with q1, q2 and q3 the first three quartiles of the sample of size N_Boot.

Next, the global adjustment step 601 aims to estimate the coefficients of the functional forms α(T), β(T) = (K0 − Kmin)(T) and γ(T) = Kmin(T) of the Weibull law modelling the toughness. To achieve this objective, a maximum likelihood problem involving two types of data must be solved. These two types of data correspond to the regular data, of KIC and KJC type, and the irregular data, KCM and KJC-limit, treated as statistical censorings. In the remainder of the description, the following notations are used:
- k(i,T) represents the toughness datum i for the test temperature T;
- B(i,T) represents the size of the specimen for the datum i;
- B0 represents the size of the standard specimen (often set at 25 mm);
- N_IC represents the number of regular data of KIC and KJC type;
- N_CM represents the number of KCM type data;
- N_JCL represents the number of KJC-limit data.

The statistical likelihood of the set of data is then defined by the product:

L(Kmin, α, (K0 − Kmin))(k) = L1(Kmin, α, (K0 − Kmin))(k_IC) × L2(Kmin, α, (K0 − Kmin))(k_CM) × L3(Kmin, α, (K0 − Kmin))(k_JC-limit)

in which L1(Kmin, α, (K0 − Kmin))(k_IC) is the product of the densities at the N_IC regular KIC and KJC data, considered as valid toughness measurements:

L1(Kmin, α, (K0 − Kmin))(k_IC) = Π_{i=1..N_IC} (α/(K0 − Kmin)) · (B(i,T)/B0) · ((k(i,T) − Kmin)/(K0 − Kmin))^(α−1) · exp(−(B(i,T)/B0) · ((k(i,T) − Kmin)/(K0 − Kmin))^α)

and L2(Kmin, α, (K0 − Kmin))(k_CM) is the product of the contributions of the possible N_CM irregular data of KCM type, considered as right censorings for N_CM expected toughness values:

L2(Kmin, α, (K0 − Kmin))(k_CM) = Π_{i=1..N_CM} exp(−(B(i,T)/B0) · ((k(i,T) − Kmin)/(K0 − Kmin))^α)
Finally, L3(Kmin, α, (K0 − Kmin))(k_JC-limit) is the product of the contributions of the N_JCL data considered as censorings for N_JCL expected toughness values:

L3(Kmin, α, (K0 − Kmin))(k_JC-limit) = Π_{i=1..N_JCL} [1 − exp(−(B(i,T)/B0) · ((k_JC-lim(i,T) − Kmin)/(K0 − Kmin))^α)]

α, (K0 − Kmin) and Kmin are then replaced in L(Kmin, α, (K0 − Kmin))(k) by the functional forms α(T), (K0 − Kmin)(T) and Kmin(T). It is then necessary to solve an optimization problem consisting in searching for the coefficients of the functional forms α(T), (K0 − Kmin)(T) and Kmin(T) such that the likelihood is maximum. For ease of calculation, the log-likelihood can be used. On the other hand, Kmin(T) must be less than the minimum of the toughness data of the database whatever T, and all the parameters of the Weibull law must be positive, which imposes solving the problem taking these constraints into account. The estimates of the coefficients (a_j), (b_j) and (c_j) of the functional forms α(T), (K0 − Kmin)(T) and Kmin(T) are then sought by solving the following constrained optimization problem:

max_{a_j, b_j, c_j} log L(Kmin(T), α(T), (K0 − Kmin)(T))(k)
such that α(T) > 0, (K0 − Kmin)(T) > 0, Kmin(T) > 0,
and Kmin(T) < min_i k(i,T) for every test temperature T   (11)

This problem is then solved by a method based on a genetic algorithm, by adapting the computation of the adaptation (fitness) to take the constraints into account, as described for example in the book by M. Mitchell entitled An Introduction to Genetic Algorithms, MIT Press, 1996, pp. 35-117. This algorithm is based on the definition of a population of Np individuals. Each individual represents a point in the state space. It is characterized by a set of genes, corresponding to the values of the variables to be estimated, and an adaptation, corresponding to the value of the criterion to be optimized. The algorithm then iteratively generates populations, on which selection, crossover and mutation processes are applied, which aim to ensure an efficient exploration of the state space.
[0016] The evolution of the set of individuals over several generations makes it possible to reach the optima of the optimization problem treated. The process keeps a constant population size, and each iteration is called a generation, by analogy with genetics. A population initially built by a random draw evolves from a generation k to a generation k + 1 by applying the following steps to the individuals (Figure 7):
- an evaluation step 700 computing the adaptation of the individuals to the solving of the problem: the log-likelihood log L(Kmin, α, (K0 − Kmin))(k) is computed for the genes corresponding to the values of the coefficients of the functional forms α(T), (K0 − Kmin)(T) and Kmin(T); however, if the genes of an individual do not satisfy the constraints stated in equation (11), then its adaptation is set to −∞;
- a selection step 701 designating the individuals which, with respect to their adaptation, are the most apt to survive and transmit their genes;
- a crossover step 702 making it possible to mix the genes of two individuals to give two child individuals intended to replace them;
- a mutation step 703 modifying a gene for certain randomly drawn individuals.

Stopping the algorithm can be done when the population stops evolving, or after a fixed number of generations. The individual with the greatest adaptation in the final population then corresponds to a solution of the problem.

Figure 8 is a diagram illustrating in a simplified manner a validation technique of the adjusted functional model. The probabilistic evaluation can only be considered relevant if the adjusted model is validated beforehand. The current state of the art does not propose a goodness-of-fit test in the particular case where the variable in question is indexed by other variables. On the other hand, once the adjusted model has been statistically validated, it is useful to have criteria for evaluating it with respect to the number of parameters and data used.
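A minimal real-coded genetic algorithm mirroring steps 700 to 703 might look as follows; the population size, mutation rate and stopping rule are illustrative, and the constraint handling (adaptation set to −∞) follows the evaluation step described above:

```python
import random

def genetic_maximize(fitness, bounds, pop_size=40, generations=120, rng=random):
    """Sketch of a genetic algorithm: evaluation (700), selection (701),
    crossover (702), mutation (703). `fitness` returns -inf for individuals
    violating the constraints. Hyper-parameters are illustrative."""
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)    # step 700: evaluation
        parents = scored[:pop_size // 2]                   # step 701: selection
        children = []
        while len(children) < pop_size - len(parents):     # step 702: crossover
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0
            children.append(p1[:cut] + p2[cut:])
        for child in children:                             # step 703: mutation
            if rng.random() < 0.3:
                j = rng.randrange(dim)
                lo, hi = bounds[j]
                child[j] = rng.uniform(lo, hi)
        pop = parents + children
    return max(pop, key=fitness)
```

For the constrained problem (11), `fitness` would evaluate the censored log-likelihood for the coefficients carried by the genes and return −∞ whenever a constraint is violated.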
Thus, it is proposed firstly to apply a conditional χ² test 801 to the adjusted model 800. Then, if the adjusted model 800 is not rejected 802 following the application of the test 801, at least one criterion 803 is determined. The conditional χ² test is described below. In the following, the toughness and the temperature are represented by the random variables Y and X, respectively. It should be noted that the data pairs (Y(q), X(q)) are mutually independent for any q = 1, ..., n. [0017] It is proposed here to use an innovative and robust test to treat the case of a variable indexed by other variables. For this purpose, the space in which Y varies is divided into L classes S_l (Figure 9), then the observed numbers N_obs,l (number of observations Y_i in the set S_l) are compared to the theoretical numbers n·p_l (number of observations Y_i expected in the set S_l). In Figure 9, the curve 900 represents the density of Y|T1, the curve 901 represents the density of Y|T2 and the curve 902 represents the density of Y|T3. With p_l = P(Y ∈ S_l), the vector Z is defined by:

Z = (N_obs,1 − n·p_1, ..., N_obs,L − n·p_L)

and, for n sufficiently large, Z behaves as a centred Gaussian vector of covariance matrix Γ given, for l, m ∈ {1, ..., L}, by:

Γ_l,l = n · [p_l − Σ_{k=1..K} p_l,k² / q_k]
Γ_l,m = −n · Σ_{k=1..K} p_l,k · p_m,k / q_k, for l ≠ m   (12)

where:
- K is the number of distinct values of X;
- p_l,k = P(Y ∈ S_l, X = x_k) for all l ∈ {1, ..., L} and k ∈ {1, ..., K};
- q_k = P(X = x_k).

The process consists in defining a test statistic whose probability law is known, which is the case for the following statistic:

Uᵀ · Σ⁻¹ · U ~ χ²(Q) (where Q is the dimension of the vector U)

for any centred Gaussian vector U of covariance matrix Σ.
However, this result is not directly applicable in our case, since the components of the vector Z are bound by the relation:

Σ_{l=1..L} (N_obs,l − n·p_l) = 0

Nevertheless, by noting Z* the vector containing only the first L − 1 components of Z and Γ* the covariance matrix of Z*, that is to say the matrix Γ deprived of its last column and its last line, it is deduced that:

Z*ᵀ · (Γ*)⁻¹ · Z* ~ χ²(L − 1)   (13)

The test obtained then corresponds to the rejection of the hypothesis H at threshold a if the test statistic Z*ᵀ · (Γ*)⁻¹ · Z* is greater than z_{1−a}, the quantile of order 1 − a of the χ² law with L − 1 degrees of freedom. In the context that interests us, rejecting the hypothesis H then amounts to rejecting the modelling of f_{Y|X}(y, x) by h(y, x), since:

h(y, x) = f_{Y|X}(y, x) ⟹ f_Y(y) = Σ_{x ∈ Ω_X} f_{Y|X}(y, x) · p_X(x)   (14)

and hence:

f_Y(y) ≠ Σ_{x ∈ Ω_X} f_{Y|X}(y, x) · p_X(x) ⟹ h(y, x) ≠ f_{Y|X}(y, x)   (15)

It should be noted that, in the particular case where X takes a single value (x is a singleton), the fit test proposed here is equivalent to performing a classical χ² fit test since, in this case, q_k = 1 and p_l,k = p_l for all l = 1, ..., L. The matrix Γ is then written as follows:

Γ = n · [ (1 − p_1)·p_1  −p_1·p_2  ...  −p_1·p_L ;
          −p_2·p_1  (1 − p_2)·p_2  ...  −p_2·p_L ;
          ... ;
          −p_L·p_1  −p_L·p_2  ...  (1 − p_L)·p_L ]

In this case, Γ corresponds to the covariance matrix associated with a conventional χ² fit test. [0018] If the adjusted model 800 is not rejected 802 following the application of the test 801, at least one information criterion 803 of the toughness model is determined. The methodology presented makes it possible to produce several statistical models for the same database. Indeed, according to the functional model selected for each of the three parameters of the Weibull law, it is possible to obtain a different model with a different number of variables, that is, of coefficients of the functional forms. The problem then arises of prioritizing these models according to relevant criteria.
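The conditional χ² statistic of equations (12) and (13) can be computed directly; the helper below is a sketch built on the covariance formulas as reconstructed here, and in the singleton case it reduces to the classical Pearson statistic, which the example exploits:

```python
def conditional_chi2(n_obs, p, p_joint, q, n):
    """Test statistic Z*^T (Gamma*)^-1 Z* for L classes and K covariate
    values; compare the result with the chi-squared quantile for L - 1
    degrees of freedom. All argument names are illustrative."""
    L, K = len(p), len(q)
    z = [n_obs[l] - n * p[l] for l in range(L - 1)]   # first L-1 components of Z
    # covariance matrix Gamma* (last row and column dropped), eq. (12)
    G = [[n * ((p[l] if l == m else 0.0)
               - sum(p_joint[l][k] * p_joint[m][k] / q[k] for k in range(K)))
          for m in range(L - 1)] for l in range(L - 1)]
    # solve G x = z by Gauss-Jordan elimination, then return z . x
    m = [row[:] + [zl] for row, zl in zip(G, z)]
    d = len(m)
    for c in range(d):
        piv = max(range(c, d), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(d):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    x = [m[r][d] / m[r][r] for r in range(d)]
    return sum(zl * xl for zl, xl in zip(z, x))
```

With a single covariate value (q = [1]), the statistic equals the usual Pearson sum Σ (N_obs,l − n·p_l)² / (n·p_l).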
For this, we propose to use three criteria: the maximum likelihood of the model relative to the database; the AIC criterion (Akaike Information Criterion); the BIC criterion (Bayesian Information Criterion). [0019] The AIC criterion uses the logarithm of the maximum likelihood and the number of model parameters. It penalizes models with too many variables, namely those that over-learn and generalize badly:

AIC = −2 · log(L(θ̂)) + 2·k

with θ̂ the vector of model parameters and k the number of parameters, or variables, of the model. The best performing model is the one with the lowest AIC. The BIC criterion further penalizes over-parameterization with respect to the number n of data:

BIC = −2 · log(L(θ̂)) + k · log(n)

In the same way as for the AIC criterion, the best performing model is the one with the lowest BIC.

Figure 10 shows an embodiment of the invention comprising the optional steps of sub-sampling, local estimation and determination of the variation intervals of the coefficients of the functional forms. In a first step, a step 1000 is carried out so as to acquire toughness measurements made on a plurality of test specimens taken from a steel product, to acquire the size of the specimens on which said measurements were made, and to acquire the temperature corresponding to each toughness measurement. This step corresponds to the step 300 described above. [0020] Then, a step 1001 of sub-sampling of the toughness measurements is followed by a local estimation step 1002. These steps correspond respectively to the steps 500 and 501 described above. [0021] A step 1004 then makes it possible to determine the variation intervals of the coefficients of the functional forms, and a step 1005 implements a global adjustment of these parameters. These steps correspond respectively to the steps 600 and 601 described above. A step 1007 implements a conditional χ² test corresponding to the step 801 described above. A step 1006 determines the brittle-ductile transition temperature.
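The two information criteria are straightforward to compute from the maximized log-likelihood (a sketch; `aic_bic` is an illustrative name):

```python
import math

def aic_bic(loglik_max, n_params, n_data):
    """AIC and BIC criteria as given in the text; among competing models,
    the one with the lowest value of the chosen criterion is preferred."""
    aic = -2.0 * loglik_max + 2.0 * n_params
    bic = -2.0 * loglik_max + n_params * math.log(n_data)
    return aic, bic
```

Because log(n) exceeds 2 as soon as n > 7, the BIC penalizes each extra coefficient of the functional forms more heavily than the AIC on any realistic toughness database.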
For this, the value of the temperature for which the average dispersion model of the toughness takes the reference value of 100 MPa·m^(1/2) is identified. [0022] Figure 11 shows a computing device that can implement the method for determining the dispersion of the toughness. This device comprises a central processing unit (CPU) 1101 connected to an internal communication bus 1100. A random access memory (RAM) 1107 is also connected to the bus. The device further comprises a mass storage device controller 1102, whose function is to manage accesses to a mass memory, such as a hard disk 1103. The mass memory stores computer program instructions and data enabling the implementation of the method. The mass memory can be composed of all forms of non-volatile memory, such as for example EPROMs, EEPROMs, flash memories, magnetic disks such as internal hard disks and removable disks, and magneto-optical disks. The device also includes a network adapter 1105 managing access to a telecommunications network 1106. [0023] Optionally, the device may also include haptic equipment 1109 such as a cursor control device, a keyboard or other similar equipment. Cursor control equipment can thus be used in the device to allow the user to position a cursor at a given location on a screen 1108. In addition, the cursor control device allows the user to select various commands and generate control signals. The cursor control device may be a mouse, one of the buttons of said mouse being used to trigger the generation of the input signals.
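Step 1006 can be sketched as a root search: assuming the adjusted functional forms are available as callables and that the mean toughness of the Weibull model increases with temperature, the transition temperature is the T at which the mean reaches the 100 MPa·m^(1/2) reference. The functional forms used in the example are purely illustrative:

```python
import math

def transition_temperature(alpha, scale, kmin, t_lo=-200.0, t_hi=300.0, ref=100.0):
    """Brittle-ductile transition temperature: the T at which the mean of the
    adjusted Weibull toughness model reaches the reference value, found by
    bisection (a sketch; assumes the mean is increasing over [t_lo, t_hi])."""
    def mean_k(T):
        # mean of the three-parameter Weibull law at temperature T
        return kmin(T) + scale(T) * math.gamma(1.0 + 1.0 / alpha(T))
    for _ in range(200):
        mid = 0.5 * (t_lo + t_hi)
        if mean_k(mid) < ref:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)
```

In practice the callables would be the adjusted forms α(T), (K0 − Kmin)(T) and Kmin(T) produced by the global adjustment step.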
Claims (20) [0001] CLAIMS 1. A computer-implemented method for determining the dispersion of the toughness of a steel product subjected to thermal variations, comprising: a step (300) of acquiring toughness measurements made on a plurality of specimens taken from at least one steel product, of acquiring the size of the specimens on which said measurements have been made, and of acquiring the temperature corresponding to each toughness measurement; a step (301) of selecting the functional forms α(T), β(T), γ(T) of the parameters of a Weibull law representative of the dispersion of the toughness of the steel for a given temperature T, a given functional form being defined using at least one coefficient; a step (302) of estimating the coefficients of the functional forms α(T), β(T), γ(T) of the Weibull model so as to maximize the likelihood for the data obtained at the acquisition step, said likelihood being representative of the adequacy between the distribution observed on said data and a theoretical distribution to be adjusted corresponding to the functional forms α(T), β(T), γ(T). [0002] 2. The method as claimed in claim 1, wherein the functional forms are chosen by using the results obtained (304) following the application of the acquisition step (300). [0003] 3. The method as claimed in claim 2, comprising a step (500) of sub-sampling of the toughness measurements obtained in the measurement acquisition step (300), so as to estimate the parameters of the Weibull law over predefined temperature intervals. [0004] 4. The method of claim 3, wherein the parameters of the Weibull law are estimated, for a temperature interval identified in the sub-sampling step (500), relative to a reference temperature T_ref associated with this interval. [0005] 5. The method of claim 4, wherein the reference temperature T_ref of an interval is the average temperature of said interval. [0006] 6.
The method of claim 4, wherein the reference temperature T_ref of an interval is the median temperature of said interval. [0007] 7. Method according to one of claims 3 to 6, wherein a sequential sub-sampling is implemented so that each sub-sample of data corresponds to the data associated with a temperature sub-interval. [0008] 8. Method according to one of claims 3 to 7, wherein a sliding sub-sampling is implemented so that each sub-sample of data corresponds to the data associated with a temperature sub-interval. [0009] 9. Method according to one of claims 3 to 8, comprising a step (501) of local estimation of the three parameters α(T_ref), β(T_ref), γ(T_ref) of the Weibull law on each sub-interval of reference temperature T_ref obtained by sub-sampling. [0010] 10. The method of claim 9, wherein the local estimation (501) is implemented using the method of moments. [0011] 11. The method of claim 9, wherein the local estimation (501) is implemented using the maximum likelihood method. [0012] 12. The method of claim 9, wherein the local estimation (501) is implemented using a combination of the method of moments and the maximum likelihood method. [0013] 13. Method according to one of claims 9 to 12, wherein the step of selecting the functional forms (301, 502) is implemented on the basis of k estimates of α, β and γ, a function of the temperature being for that purpose adjusted on these data, said function being chosen from a constant function, a linear function, a quadratic function, an exponential function with two coefficients or an exponential function with three coefficients. [0014] 14. Method according to one of the preceding claims, comprising a step (600) of determining the intervals of variation of the coefficients of the functional forms identified in the step (301) of selecting the functional forms α(T), β(T), γ(T), these intervals being determined by a bootstrap statistical inference method.
[0015] 15. Method according to one of the preceding claims, comprising a step of evaluating the adjusted functional model with respect to the number of parameters and data used, said model being the result of the estimation of the coefficients (302), the evaluation of this model being carried out after the application of a conditional χ² test (801). [0016] 16. Method according to one of the preceding claims, comprising a step of determining the brittle-ductile transition temperature, said temperature corresponding to the value of the temperature obtained when the average dispersion model of the toughness is equal to a reference value of 100 MPa·m^(1/2). [0017] 17. Method according to one of the preceding claims, wherein the coefficients of the functional forms α(T), β(T), γ(T) of the Weibull model are estimated by maximizing a cost function, said function corresponding to the log-likelihood of the theoretical Weibull model applied to the previously acquired toughness measurement data (300), the Weibull model being characterized by three parameters defined by the functional forms α(T), β(T), γ(T) defined from local estimates made on a sub-sampling with respect to the temperature. [0018] 18. Method according to one of the preceding claims, wherein the acquisition step (300) also comprises an acquisition of the level of relevance of the other acquired data, these levels being chosen to indicate that the data are valid, censored left or censored right. [0019] 19. Electronic system comprising at least one hardware module implementing the method according to one of the preceding claims. [0020] 20. Computer program comprising instructions for executing the method according to one of claims 1 to 18, when the program is executed by a data processing module.
Patent family:
Publication number | Publication date
WO2015165962A1 | 2015-11-05
FR3020681B1 | 2018-02-16
Cited and citing documents:
Publication number | Filing date | Publication date | Applicant | Patent title
WO2013086933A1 | 2011-12-13 | 2013-06-20 | East China University of Science and Technology | Method for calibration of parameters assessing brittle fracture of material based on the Beremin model
CN108804806A | 2018-06-05 | 2018-11-13 | Southwest Jiaotong University | Simplified MLE method for Weibull distribution parameters in a combined stress CA model
KR101899690B1 | 2016-12-23 | 2018-09-17 | POSCO | Method and apparatus for optimizing production conditions of plate using standardization of DWTT shear area data
FR3097643B1 | 2019-06-18 | 2021-06-25 | Commissariat à l'Énergie Atomique et aux Énergies Alternatives | Method for quantifying a species present by accumulation or fixation in a medium
CN112613161A | 2020-11-30 | 2021-04-06 | Pangang Group Xichang Steel & Vanadium Co., Ltd. | Heat balance calculation method for semi-steel steelmaking and application
Legal events:
2015-04-30 | PLFP | Fee payment (year of fee payment: 2)
2015-11-06 | PLSC | Search report ready (effective date: 2015-11-06)
2016-04-28 | PLFP | Fee payment (year of fee payment: 3)
2017-04-28 | PLFP | Fee payment (year of fee payment: 4)
2018-04-26 | PLFP | Fee payment (year of fee payment: 5)
2019-04-29 | PLFP | Fee payment (year of fee payment: 6)
2020-04-30 | PLFP | Fee payment (year of fee payment: 7)
2021-04-29 | PLFP | Fee payment (year of fee payment: 8)
Priority:
Application number | Filing date | Patent title
FR1453929A | 2014-04-30 | Method of determining the dispersion of the toughness and the brittle-ductile transition temperature of a steel product subjected to thermal variations (granted as FR3020681B1, 2018-02-16)
PCT/EP2015/059334 | 2015-04-29 | Method for determining the strength distribution and the ductile-brittle transition temperature of a steel product subjected to thermal variations (published as WO2015165962A1)